Notes on Jack Schwartz's

Limits of Artificial Intelligence

I don't agree with the opinion I think you expressed, to the effect
that Knuthian algorithmics was the way to go in AI; your paper
illustrated the point with comments about obstacle avoidance.

Here's the formulation I disagree with.

``The main way to make progress in AI is to separate out from the confused goals
sharp algorithmic problems and find sharp algorithmic (or better analytic)
solutions.''  The ideal would be the mathematical solution of the game of nim.
My opinion is that too few of the mechanisms needed for intelligence
correspond to solvable mathematical problems.
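
	Nim itself, for reference, is disposed of completely by Bouton's
theorem: the player to move loses under best play exactly when the
exclusive-or (the nim-sum) of the pile sizes is zero.  The sketch below,
with names chosen only for illustration, is essentially the entire
solution; very little in AI collapses this neatly.

    # Bouton's solution of nim: the side to move loses under best play
    # exactly when the xor (nim-sum) of the pile sizes is zero.
    from functools import reduce

    def nim_sum(piles):
        return reduce(lambda a, b: a ^ b, piles, 0)

    def winning_move(piles):
        """Return (pile_index, new_pile_size) for a winning move, or None."""
        s = nim_sum(piles)
        if s == 0:
            return None            # every move loses against best play
        for i, p in enumerate(piles):
            if p ^ s < p:          # shrink this pile so the nim-sum becomes 0
                return i, p ^ s

    print(winning_move([3, 4, 5]))   # -> (0, 1): take 2 from the pile of 3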

The most obvious counterexample is chess.  This has been studied for many
years, and Berliner's Hitech recently won the Pennsylvania championship
in competition with human players, giving it a strong master
rating.  Of course, it used more brute force than I would consider aesthetic
or genuinely necessary, but it used a lot of chess lore too.  This chess lore
is partly conventional, i.e. the kind of information accumulated by chess
masters and published in chess books.  The other part of the information is
algorithmic --- discovered by people writing chess programs.  Although many
excellent mathematicians have been chess masters, and many excellent
mathematicians have been involved with writing chess programs (especially
Russians), none of the advances involve conventional mathematical discoveries,
and there has been little that could be called a conventional computer science
discovery either.

	Two of the early chess discoveries illustrate the point --- both
are partly mine.  The first is the alpha-beta heuristic, which I
discovered in 1956, though in a form that mixed it with another idea.
The matter was clarified about 1960 by Tim Hart and Mike Levin, who were
at the M.I.T. AI Lab.  Unfortunately, they didn't publish, and the first
publication was in 1963 by the Soviet mathematician Alexander Brudno,
who had discovered the idea independently.  The idea is to enter one's analysis
of a chess position X in which player P is to move with two numbers α
and β with α < β.  α represents the value P can achieve by already
analyzed moves that avoid reaching X; β is the value his opponent can
hold him to by already analyzed moves that avoid X.  If the value of X is
ever determined to be outside the range (α,β), X need not be further
analyzed.  Moreover, if the value is less than α, no further moves need
be analyzed at the previous level.  To get the maximum saving in search,
the alpha-beta heuristic needs to be used with a good move-orderer that
tries moves for each player best-first as much as possible.  If the
move-orderer is successful enough, the number of final positions
examined is about the square root of the number that would otherwise be
examined.
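
	To make the pruning rule concrete, here is a minimal sketch of
alpha-beta in its ``negamax'' form, run on an explicit game tree rather
than on chess positions; the representation, the names and the tiny
example are mine, chosen only for illustration.

    # Alpha-beta sketch (negamax form): a position is either a number,
    # its value to the side to move there, or a list of successor
    # positions.  A chess program would instead generate successors with
    # a move generator, sort them best-first with a move-orderer, and
    # stop at a depth limit with a static evaluation.

    def alphabeta(pos, alpha, beta):
        if not isinstance(pos, list):
            return pos                  # leaf: value to the side to move
        for successor in pos:           # a move-orderer would sort these
            score = -alphabeta(successor, -beta, -alpha)
            if score >= beta:
                return score            # cutoff: the rest need not be examined
            if score > alpha:
                alpha = score           # best value guaranteed so far
        return alpha

    # The root mover can guarantee 3.  In the second subtree the opponent's
    # first reply already holds the mover to 2, less than the 3 guaranteed
    # elsewhere, so the leaf worth 9 is never examined.
    tree = [[3, 5], [2, 9], [8, 1]]
    print(alphabeta(tree, float('-inf'), float('inf')))   # -> 3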

	Curiously enough, the alpha-beta heuristic was analyzed in mathematical
detail and with excellent historical scholarship by the aforementioned Donald
Knuth.  Unfortunately, his analysis was almost entirely beside the point, because
he assumed that the moves were tried in random order rather than in some
approximation to best-first.  I presume this was because the random order
was mathematically tractable, full best-first was mathematically trivial and
had already been done, and an approximate best-first was too hard to formulate
in a mathematical way.

	The other heuristic is the ``killer heuristic'' used in move ordering.
The idea is to maintain a list of moves that have been successful in parallel
positions and try them first.  The killer heuristic makes alpha-beta work
better.  Paul Abrahams did some experiments with this when he was an M.I.T.
graduate student but didn't publish.  However, the killer heuristic is used
in many modern chess programs.  It goes beyond the abstract tree model of a
game in that it requires identifying the ``same move'' in different positions.
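
	A move-orderer using the killer heuristic might keep, for each search
depth, a short list of moves that recently produced cutoffs and put them
at the head of the move list.  The sketch below shows one such
arrangement; the names, the bookkeeping and the example move are mine,
not those of any particular program.

    # Killer-heuristic sketch: remember moves that caused cutoffs at each
    # search depth and try them first in other positions at that depth.
    from collections import defaultdict

    killers = defaultdict(list)          # depth -> recent cutoff moves

    def record_killer(depth, move, keep=2):
        """Call when `move` causes a cutoff at `depth`."""
        if move not in killers[depth]:
            killers[depth].insert(0, move)
            del killers[depth][keep:]    # remember only the most recent few

    def order_with_killers(depth, moves):
        """Put remembered killer moves first; the rest keep their order."""
        first = [m for m in killers[depth] if m in moves]
        return first + [m for m in moves if m not in first]

    # If Nf5 refuted one line at depth 3, try it first in sibling positions
    # at that depth.  This presupposes the program can recognize the
    # "same move" in different positions, which is exactly the point above.
    record_killer(3, "Nf5")
    print(order_with_killers(3, ["a3", "Nf5", "Qd2"]))   # -> ['Nf5', 'a3', 'Qd2']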

	The point of this discussion is that improvements in artificial
intelligence often don't take the form of conventional mathematical discoveries.
Of course, there is no excuse for ignoring relevant mathematics as
some of the early vision work ignored projective geometry.

	My further opinion is that your discussion of robotics is over-optimistic
in its expectation that successful manipulation mainly requires more
conventional algorithms.  I believe that representing what a robot can know
and needs to know about physical objects will require conceptual advances.

	The second major disagreement concerns the presumption that combinatorial
explosion is what mainly limits the performance of AI programs.  This error makes you
miss the point of my main work in AI and the work of just about everyone
else working to apply logic.

	The main problem is ``epistemological'' rather than ``heuristic,''
to use the terminology of my 1970 paper with Pat Hayes.  We don't yet know
how to characterize the information a human can get and use and which
a computer program in a similar situation can get and use.


Dear Jack:

	Our discussion and your paper lead to the following points.

	1. The main difficulty in developing and applying AI is
epistemology, not heuristics.  The problem is not that the programs
are too slow because of combinatorial difficulties but that they
are too limited, because the sets of functions and predicates being
used are too narrow to express the common sense or specialized knowledge
that needs to be used.  My paper ``Epistemological Problems of AI''
gives the 1977 status of the formalization problem and my 1983 ``Some
Expert Systems Need Common Sense'' illustrates the epistemological
limitations of MYCIN and other expert systems --- the same program
you criticize in your paper.

	2. Knuthian algorithmics has not made much contribution to AI
so far and is unlikely to do so.  Chess provides interesting illustrations.
The alpha-beta